Results 1 - 4 of 4
1.
9th International Conference on Future Data and Security Engineering, FDSE 2022 ; 1688 CCIS:462-476, 2022.
Article in English | Scopus | ID: covidwho-2173960

ABSTRACT

Thousands of infections and hundreds of deaths every day: these numbers describe the current serious situation, numbers no longer unfamiliar to any of us in the context of the raging Coronavirus disease epidemic. We therefore need solutions and technologies that fight the epidemic promptly and quickly, preventing or reducing its effects. Numerous studies have warned that contact with an infected person at a distance of less than two meters carries a high risk of Coronavirus infection. To detect contacts closer than two meters and issue violation warnings in camera-based monitoring systems, we present an approach that solves two problems: detecting objects (here, humans) and calculating the distance between objects using a chessboard calibration pattern and a bird's-eye perspective. We leverage the pre-trained InceptionV2 model, a well-known convolutional neural network for object detection, to detect people in the video. We also propose a perspective-transformation algorithm for distance calculation that converts pixel coordinates from the camera view to a bird's-eye view. The camera is calibrated and the minimum distance is selected based on a measured field distance, which is mapped to a distance in pixels; distance violations are then computed in the bird's-eye view. The proposed method is tested in several scenarios to provide warnings of social-distancing violations. This work is expected to create a safe area and provide warnings that protect employees in administrative environments with a high risk of contact with numerous people. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
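The pipeline the abstract describes (map detected foot points through a ground-plane homography, then threshold pairwise distances at 2 m) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the calibration points, foot points, and the plain DLT homography solver are all hypothetical stand-ins for their chessboard calibration and InceptionV2 detector.

```python
import numpy as np

def homography(src, dst):
    # Direct Linear Transform: solve for the 3x3 matrix H mapping
    # four image-plane points (src) to four ground-plane points (dst).
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)  # null-space vector of the DLT system

def to_birds_eye(H, pts):
    # Apply H to pixel points and dehomogenise to metric coordinates.
    pts = np.asarray(pts, dtype=float)
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def violations(ground_pts, min_dist=2.0):
    # Return index pairs closer than min_dist metres on the ground plane.
    out = []
    for i in range(len(ground_pts)):
        for j in range(i + 1, len(ground_pts)):
            if np.linalg.norm(ground_pts[i] - ground_pts[j]) < min_dist:
                out.append((i, j))
    return out

# Hypothetical calibration: pixel corners of a known 5 m x 5 m floor
# square, mapped to metric bird's-eye coordinates.
src = [(100, 400), (540, 400), (620, 80), (20, 80)]
dst = [(0, 0), (5, 0), (5, 5), (0, 5)]
H = homography(src, dst)

# Foot points of two detected people (bottom-centre of bounding boxes).
feet = [(200, 350), (260, 345)]
ground = to_birds_eye(H, feet)
print(violations(ground))  # -> [(0, 1)]: the pair is closer than 2 m
```

In practice the detector supplies the foot points per frame and `cv2.getPerspectiveTransform` would replace the hand-rolled DLT; the thresholding logic is the same.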

2.
Computer Vision and Image Understanding ; 226, 2023.
Article in English | Scopus | ID: covidwho-2130572

ABSTRACT

Periocular is one of the promising biometric traits for human recognition. It encompasses the area surrounding the eyes, including the eyebrows, eyelids, eyelashes, eye-folds, eye shape, and skin texture. Its relevance has been further emphasized during the COVID-19 pandemic due to masked faces. This article therefore presents a detailed review of periocular biometrics to understand its current state. The paper first discusses the various face and periocular techniques specially designed to recognize humans wearing a face mask. Then, different aspects of periocular biometrics are reviewed: (a) the anatomical cues present in the periocular region useful for recognition, (b) the various feature extraction and matching techniques developed, (c) recognition across different spectra, (d) fusion with other biometric modalities (face or iris), (e) recognition on mobile devices, (f) its usefulness in other applications, (g) periocular datasets, and (h) competitions organized for evaluating the efficacy of this biometric modality. Finally, various challenges and future directions in the field of periocular biometrics are presented. © 2022

3.
6th International Conference on System-Integrated Intelligence, SysInt 2022 ; 546 LNNS:116-125, 2023.
Article in English | Scopus | ID: covidwho-2048151

ABSTRACT

The global COVID-19 pandemic has stimulated the use of disinfection robots: in September 2021, following an action by the European Commission, 200 disinfection robots were delivered to European hospitals. UV-C light is a common disinfection method; however, direct exposure to UV-C radiation is harmful, so disinfection can be performed only in areas strictly off-limits to human personnel. We believe more advanced safety mechanisms are needed to increase operational flexibility and the safety level. We propose a safety mechanism based on vision and artificial intelligence, optimised for execution on mobile robot platforms. It analyses four video streams in real time and disables the UV-C lamps when needed. Compared with other detection methods, it has a relatively wider and deeper detection range and the capability to operate in a dynamic environment. We present the development of the method with a performance comparison of different implementation solutions, and an on-field evaluation through integration on a mobile disinfection robot. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
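The interlock behaviour described above (disable the lamps as soon as a person appears in any of the four streams) reduces to a simple any-detection gate. The sketch below is purely illustrative; `detect_person` and the frame representation are hypothetical stand-ins for the paper's on-board AI detector and camera streams.

```python
# Minimal sketch of a vision-based UV-C lamp interlock.

def detect_person(frame) -> bool:
    # Placeholder for the real-time neural person detector that the
    # system runs on each camera frame.
    return frame.get("person", False)

def lamps_enabled(frames) -> bool:
    # UV-C lamps stay on only while no stream shows a person.
    return not any(detect_person(f) for f in frames)

# Four simulated camera frames; the third one contains a person.
frames = [{"person": False}, {"person": False},
          {"person": True}, {"person": False}]
print(lamps_enabled(frames))  # -> False: a person is visible, lamps off
```

A real implementation would run the detector per stream on the robot's compute platform and latch the lamps off with appropriate hysteresis, but the safety decision is this disjunction over streams.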

4.
Journal of Image and Graphics ; 27(6):1723-1742, 2022.
Article in Chinese | Scopus | ID: covidwho-1903894

ABSTRACT

Public security and social governance are essential to national development today. Preventing large-scale riots in communities and various urban crimes poses challenges for spatially and temporally scaled social governance during coronavirus disease 2019 (COVID-19), such as highly accurate human identity verification, highly efficient human behavior analysis, and crowd-flow tracking and tracing. The core of the challenge is to use computer vision technologies to extract visual information in complex scenarios and to fully express, identify, and understand the relationship between human behavior and scenes, thereby improving social administration and governance. Visual recognition technologies oriented toward complex scenarios can improve the efficiency of social intelligence and accelerate intelligent social governance. Human recognition faces three main challenges: 1) diverse attacks, such as mask-occlusion attacks, affect the security of human identity recognition; 2) the large span of temporal and spatial information affects the accuracy of face recognition across multiple ages, especially at retrieval scales of tens of millions; 3) complex and changeable scenarios require the system to be highly robust and to adapt to diverse environments. It is therefore necessary to develop technologies for remote human identity verification with a high degree of security, accurate face recognition, human behavior analysis, and scene-semantic recognition. The motion analysis of individual behavior and of group-interaction trends are key components of human visual contexts in complex scenarios. In detail, individual behavior analysis mainly includes video-based pedestrian re-identification and video-based action recognition, while group-interaction recognition is mainly based on visual question answering and dialogue.
Video networks can record image information about individuals and groups from multi-source cameras. Multi-camera human behavior research covers group segmentation, group tracking, group behavior analysis, and abnormal behavior detection. However, individual behavior and group interaction recorded by multiple cameras in real scenarios are extremely complex, and it remains a great challenge to improve the performance of multi-camera, multi-object behavior recognition through integrated modeling of real scene structure, individual behavior, and group interaction. Video-network recognition of individual and group behavior mainly depends on the captured visual information about the scene, individuals, and groups. Nonetheless, individual behavior analysis and group-interaction recognition in complex scenarios commonly require human knowledge and prior knowledge beyond visual information. Specifically, crowd-sourced data have improved visual computing performance in visual question answering, dialogue, and visual-language navigation. The knowledge inherent in crowd-sourced data can support data-driven machine learning models that apply comprehensive knowledge and priors to individual behavior analysis and group-interaction recognition, establishing a new method of data-driven, knowledge-guided visual computing. In addition, facial expression behavior can be recognized from human facial micro-motions, much as speech conveys the voice of language. Speech emotion recognition can capture and understand human emotions and can better support human-machine collaborative learning. Deeper research into human visual recognition technology is therefore important. Current research has focused on facial expression recognition, speech emotion recognition, expression synthesis, and speech emotion synthesis.
We review the contexts of real-time human identification in complex scenarios, individual behavior and group-interaction understanding and analysis, visual and speech emotion recognition and synthesis, and the comprehensive use of knowledge and priors in machine learning. Research and application scenarios are thereby facilitated for visual capability in complex scenarios. We summarize the current situation and predict frontier technologies and development trends. Human visual recognition technology will harness visual capability to recognize the relationships among humans, behavior, and scenes. It has the potential to further improve standard data construction, model computing resources, and model robustness and interpretability. © 2022, Editorial Office of Journal of Image and Graphics. All rights reserved.
